

Robot Dogs Are Going on Patrol at the 2026 World Cup in Mexico

WIRED

Authorities in the Mexican city of Guadalupe, Nuevo León, this week unveiled four K9-X "robodogs" that will help officers provide security during matches at BBVA Stadium, one of the three Mexican venues of the 2026 World Cup this summer. The robot dogs are not armed, but each unit incorporates video cameras, night vision, and communication systems used to issue warnings or instructions. Their function is to deter illegal activity, detect unusual behavior, identify suspicious objects, control crowds, and immediately alert law enforcement when the system deems it necessary. The robot dogs operate semi-autonomously: they do not make decisions or execute movements on their own.


Advancing Multi-Step Mathematical Reasoning in Large Language Models through Multi-Layered Self-Reflection with Auto-Prompting

Loureiro, André de Souza, Valverde-Rebaza, Jorge, Noguez, Julieta, Escarcega, David, Marcacini, Ricardo

arXiv.org Artificial Intelligence

Recent advancements in Large Language Models (LLMs) have significantly improved their problem-solving capabilities. However, these models still struggle when faced with complex multi-step reasoning tasks. In this paper, we propose the Multi-Layered Self-Reflection with Auto-Prompting (MAPS) framework, a novel approach designed to enhance multi-step mathematical reasoning in LLMs by integrating techniques such as Chain of Thought (CoT), Self-Reflection, and Auto-Prompting. Unlike traditional static prompting methods, MAPS employs an iterative refinement process. Initially, the model generates a solution using CoT prompting. When errors are detected, an adaptive self-reflection mechanism identifies and analyzes them, generating tailored prompts to guide corrections. These dynamically adjusted prompts enable the model to iteratively refine its reasoning. Experiments on four well-established benchmarks across multiple LLMs show that MAPS significantly outperforms standard CoT and achieves competitive results with reasoning-optimized models. In addition, MAPS enables general-purpose LLMs to reach performance levels comparable to specialized reasoning models. While deeper reflection layers improve accuracy, they also increase token usage and costs. To balance this trade-off, MAPS strategically limits reflection depth, ensuring an optimal balance between cost and reasoning performance.
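The refinement loop described above can be sketched in a few lines. In this sketch, `model` and `critic` are hypothetical callables standing in for the LLM and its error-detection pass (they are not code from the paper), and the depth cap mirrors the paper's strategy of limiting reflection layers to balance cost and accuracy.

```python
def maps_solve(problem, model, critic, max_reflections=3):
    """Sketch of the MAPS control loop: one Chain-of-Thought attempt,
    then a bounded number of self-reflection rounds, each driven by an
    auto-generated corrective prompt."""
    prompt = f"Solve step by step: {problem}"      # initial CoT prompt
    answer = model(prompt)
    for _ in range(max_reflections):               # cap reflection depth
        errors = critic(problem, answer)           # self-reflection: locate mistakes
        if not errors:
            break                                  # no errors detected: accept
        # Auto-Prompting: build a tailored prompt from the detected errors.
        prompt = (f"Problem: {problem}\nPrevious answer: {answer}\n"
                  f"Suspected errors: {errors}\n"
                  "Revise the reasoning and give a corrected answer.")
        answer = model(prompt)
    return answer
```

A toy model that only answers correctly after seeing a revision prompt converges in one reflection round under this loop.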


MammoRGB: Dual-View Mammogram Synthesis Using Denoising Diffusion Probabilistic Models

Garza-Abdala, Jorge Alberto, Fumagal-González, Gerardo A., Avendano, Daly, Cardona, Servando, Hussain, Sadam, de Avila-Armenta, Eduardo, Toscano-Martínez, Jasiel H., Gurmendi, Diana S. M. Rosales, Pedro-Pérez, Alma A., Tamez-Pena, Jose Gerardo

arXiv.org Artificial Intelligence

Purpose: This study aims to develop and evaluate a three channel denoising diffusion probabilistic model (DDPM) for synthesizing single breast dual view mammograms and to assess the impact of channel representations on image fidelity and cross view consistency. Materials and Methods: A pretrained three channel DDPM, sourced from Hugging Face, was fine tuned on a private dataset of 11020 screening mammograms to generate paired craniocaudal (CC) and mediolateral oblique (MLO) views. Three third channel encodings of the CC and MLO views were evaluated: sum, absolute difference, and zero channel. Each model produced 500 synthetic image pairs. Quantitative assessment involved breast mask segmentation using Intersection over Union (IoU) and Dice Similarity Coefficient (DSC), with distributional comparisons against 2500 real pairs using Earth Movers Distance (EMD) and Kolmogorov Smirnov (KS) tests. Qualitative evaluation included a visual Turing test by a non expert radiologist to assess cross view consistency and artifacts. Results: Synthetic mammograms showed IoU and DSC distributions comparable to real images, with EMD and KS values (0.020 and 0.077 respectively). Models using sum or absolute difference encodings outperformed others in IoU and DSC (p < 0.001), though distributions remained broadly similar. Generated CC and MLO views maintained cross view consistency, with 6 to 8 percent of synthetic images exhibiting artifacts consistent with those in the training data. Conclusion: Three channel DDPMs can generate realistic and anatomically consistent dual view mammograms with promising applications in dataset augmentation.
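The three third-channel encodings compared in the study can be illustrated with a minimal sketch that uses plain nested lists in place of real mammogram arrays; `make_three_channel` is a hypothetical helper name invented here, not code from the paper.

```python
def make_three_channel(cc, mlo, encoding="sum"):
    """Stack a CC view, an MLO view, and a derived third channel.

    cc, mlo: 2-D lists of pixel intensities with identical shape.
    encoding: 'sum', 'absdiff', or 'zero', matching the three
    third-channel variants evaluated in the study.
    """
    def combine(a, b):
        if encoding == "sum":
            return a + b          # element-wise sum of the two views
        if encoding == "absdiff":
            return abs(a - b)     # element-wise absolute difference
        if encoding == "zero":
            return 0              # constant zero channel
        raise ValueError(f"unknown encoding: {encoding}")

    third = [[combine(a, b) for a, b in zip(row_cc, row_mlo)]
             for row_cc, row_mlo in zip(cc, mlo)]
    return [cc, mlo, third]       # channel-first layout: (3, H, W)
```

In the real pipeline the two views would first be registered to a common shape; this sketch assumes that has already been done.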


VocalBench-DF: A Benchmark for Evaluating Speech LLM Robustness to Disfluency

Liu, Hongcheng, Hou, Yixuan, Liu, Heyang, Wang, Yuhao, Wang, Yanfeng, Wang, Yu

arXiv.org Artificial Intelligence

While Speech Large Language Models (Speech-LLMs) show strong performance in many applications, their robustness is critically under-tested, especially to speech disfluency. Existing evaluations often rely on idealized inputs, overlooking common disfluencies, particularly those associated with conditions like Parkinson's disease. This work investigates whether current Speech-LLMs can maintain performance when interacting with users who have speech impairments. To facilitate this inquiry, we introduce VocalBench-DF, a framework for the systematic evaluation of disfluency across a multi-dimensional taxonomy. Our evaluation of 22 mainstream Speech-LLMs reveals substantial performance degradation, indicating that their real-world readiness is limited. Further analysis identifies phoneme-level processing and long-context modeling as the primary bottlenecks responsible for these failures. Strengthening the recognition and reasoning capabilities of individual components and pipelines can substantially improve robustness. These findings highlight the urgent need for new methods to improve disfluency handling and build truly inclusive Speech-LLMs.
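To give a feel for the kind of perturbation such a benchmark controls for, here is a toy text-level sketch of two common disfluency types, repetition and filler insertion. This is purely illustrative: the actual VocalBench-DF operates on speech audio along a richer taxonomy, and the probabilities and filler words below are invented.

```python
import random

def inject_disfluency(transcript, rng=None, fillers=("uh", "um")):
    """Toy illustration of perturbing a clean transcript with two
    disfluency types: stutter-like word repetition and filler insertion."""
    rng = rng or random.Random(0)          # fixed seed: reproducible output
    out = []
    for word in transcript.split():
        out.append(word)
        if rng.random() < 0.3:             # repeat the word
            out.append(word)
        if rng.random() < 0.2:             # insert a hesitation filler
            out.append(rng.choice(fillers))
    return " ".join(out)
```

Note that the perturbation only ever adds tokens, so the original words always survive in order, which keeps the reference answer for each test item unchanged.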


When and How to Express Empathy in Human-Robot Interaction Scenarios

Cruz, Christian Arzate, Montiel-Vazquez, Edwin C., Maeda, Chikara, Gomez, Randy

arXiv.org Artificial Intelligence

Incorporating empathetic behavior into robots can improve their social effectiveness and interaction quality. In this paper, we present whEE (when and how to express empathy), a framework that enables social robots to detect when empathy is needed and generate appropriate responses. Using large language models, whEE identifies key behavioral empathy cues in human interactions. We evaluate it in human-robot interaction scenarios with our social robot, Haru. Results show that whEE effectively identifies and responds to empathy cues, providing valuable insights for designing social robots capable of adaptively modulating their empathy levels across various interaction contexts.

In most scenarios, Large Language Models (LLMs) represent the state-of-the-art approach for classifying empathy [1], [2] and generating empathetic responses [3], [4]. However, the development of robots capable of dynamically adjusting their level of empathy based on the context remains an underexplored area [5]. To this end, we introduce whEE (when and how to express empathy), an empathy framework that provides guidelines on when robots should respond empathetically and how to achieve it. Using our framework, we analyze the utterances of speakers and listeners in dyadic and group conversations with varying levels of empathy. Our analysis identifies key empathy cues that indicate when a speaker seeks an empathetic response and the cues exhibited by listeners displaying high levels of empathy. We approach empathy by focusing on observable behaviors that individuals exhibit when demonstrating an understanding of others' emotions and engaging deeply with their experiences, referred to as behavioral empathy [6].
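The "when" versus "how" split can be illustrated with a deliberately simple rule-based sketch. The cue keywords and response templates below are invented for illustration; whEE itself detects cues with an LLM rather than keyword matching.

```python
# Hypothetical distress-cue keywords standing in for the LLM-based detector.
DISTRESS_CUES = ("sad", "worried", "lost", "stressed", "lonely")

def needs_empathy(utterance):
    """'When': flag utterances containing a distress cue."""
    text = utterance.lower()
    return any(cue in text for cue in DISTRESS_CUES)

def empathetic_reply(utterance):
    """'How': respond empathetically only when a cue was detected,
    otherwise stay in a neutral task-oriented register."""
    if needs_empathy(utterance):
        return "That sounds hard. I'm here with you; tell me more."
    return "Got it. What would you like to do next?"
```

The point of the two-function split is that the robot's empathy level is modulated per utterance rather than fixed for the whole interaction.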


Detecting Hope Across Languages: Multiclass Classification for Positive Online Discourse

Abiola, T. O., Abiodun, K. D., Olumide, O. E., Adebanji, O. O., Calvo, O. Hiram, Sidorov, Grigori

arXiv.org Artificial Intelligence

The detection of hopeful speech in social media has emerged as a critical task for promoting positive discourse and well-being. In this paper, we present a machine learning approach to multiclass hope speech detection across multiple languages, including English, Urdu, and Spanish. We leverage transformer-based models, specifically XLM-RoBERTa, to detect and categorize hope speech into three distinct classes: Generalized Hope, Realistic Hope, and Unrealistic Hope. Our proposed methodology is evaluated on the PolyHope dataset for the PolyHope-M 2025 shared task, achieving competitive performance across all languages. We compare our results with existing models, demonstrating that our approach significantly outperforms prior state-of-the-art techniques in terms of macro F1 scores. We also discuss the challenges in detecting hope speech in low-resource languages and the potential for improving generalization. This work contributes to the development of multilingual, fine-grained hope speech detection models, which can be applied to enhance positive content moderation and foster supportive online communities.
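Since systems in this shared task are ranked by macro F1, a small self-contained computation of that metric over the hope classes may help; the labels in the usage example are toy data, not results from the paper.

```python
def macro_f1(y_true, y_pred):
    """Macro F1: compute F1 independently per class, then average with
    equal class weight, so minority classes count as much as majority ones."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores.append(f1)
    return sum(scores) / len(scores)
```

The equal per-class weighting is why macro F1 is the preferred metric here: Generalized, Realistic, and Unrealistic Hope are unlikely to be equally frequent in social media data.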


Multilingual Hope Speech Detection: A Comparative Study of Logistic Regression, mBERT, and XLM-RoBERTa with Active Learning

Abiola, T. O., Abiodun, K. D., Olumide, O. E., Adebanji, O. O., Calvo, O. Hiram, Sidorov, Grigori

arXiv.org Artificial Intelligence

Hope speech, language that fosters encouragement and optimism, plays a vital role in promoting positive discourse online. However, its detection remains challenging, especially in multilingual and low-resource settings. This paper presents a multilingual framework for hope speech detection using an active learning approach and transformer-based models, including mBERT and XLM-RoBERTa. Experiments were conducted on datasets in English, Spanish, German, and Urdu, including benchmark test sets from recent shared tasks. Our results show that transformer models significantly outperform traditional baselines, with XLM-RoBERTa achieving the highest overall accuracy. Furthermore, our active learning strategy maintained strong performance even with small annotated datasets. This study highlights the effectiveness of combining multilingual transformers with data-efficient training strategies for hope speech detection.
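The data-efficient training strategy can be sketched as a pool-based active learning loop with uncertainty sampling. The `train` and `predict_proba` callables below are hypothetical placeholders for the transformer fine-tuning and inference steps, and uncertainty sampling is one common query strategy, assumed here for illustration.

```python
def uncertainty_sampling_loop(labeled, pool, train, predict_proba,
                              oracle, rounds=3, batch_size=2):
    """Pool-based active learning: repeatedly train on the labeled set,
    score the unlabeled pool, and ask the oracle (human annotator) to
    label the examples the model is least confident about."""
    pool = list(pool)
    for _ in range(rounds):
        if not pool:
            break
        model = train(labeled)
        # Least-confident first: lowest maximum class probability.
        pool.sort(key=lambda x: max(predict_proba(model, x)))
        batch, pool = pool[:batch_size], pool[batch_size:]
        labeled = labeled + [(x, oracle(x)) for x in batch]
    return labeled
```

The design intuition is that annotation budget goes to the examples near the decision boundary, which is what lets a small labeled set approach full-data performance.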


Enhancing Factual Accuracy and Citation Generation in LLMs via Multi-Stage Self-Verification

García, Fernando Gabriela, Shi, Qiyang, Feng, Zilin

arXiv.org Artificial Intelligence

This research introduces VeriFact-CoT (Verified Factual Chain-of-Thought), a novel method designed to address the pervasive issues of hallucination and the absence of credible citation sources in Large Language Models (LLMs) when generating complex, fact-sensitive content. By incorporating a multi-stage mechanism of 'fact verification-reflection-citation integration,' VeriFact-CoT empowers LLMs to critically self-examine and revise their intermediate reasoning steps and final answers. This process significantly enhances the objective accuracy, trustworthiness, and traceability of the generated outputs, making LLMs more reliable for applications demanding high fidelity such as scientific research, news reporting, and legal consultation.
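The 'fact verification-reflection-citation integration' stages can be sketched as a short pipeline. `generate`, `verify`, and `cite` below are hypothetical callables standing in for the LLM draft pass, a retrieval-backed fact checker, and a citation lookup; they are not the paper's actual components.

```python
def verifact_cot(question, generate, verify, cite, max_rounds=2):
    """Sketch of a verify-reflect-cite pipeline: draft reasoning claims,
    check each claim, revise while unsupported claims remain, then attach
    a citation to every claim that survives verification."""
    claims = generate(question)
    for _ in range(max_rounds):
        unsupported = [c for c in claims if not verify(c)]
        if not unsupported:
            break
        # Reflection pass: regenerate, steering away from failed claims.
        claims = generate(question, avoid=unsupported)
    # Citation integration: pair each verified claim with its source.
    return [(c, cite(c)) for c in claims if verify(c)]
```

The final filter guarantees that nothing unverifiable reaches the output even if the reflection budget runs out, which is the property that matters for the high-fidelity applications named above.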


To Explain Or Not To Explain: An Empirical Investigation Of AI-Based Recommendations On Social Media Platforms

Haque, AKM Bahalul, Islam, A. K. M. Najmul, Mikalef, Patrick

arXiv.org Artificial Intelligence

AI-based social media recommendations have great potential to improve the user experience. However, these recommendations often do not match user interests and create an unpleasant experience for users. Moreover, the recommendation system being a black box creates comprehensibility and transparency issues. This paper investigates social media recommendations from an end-user perspective. For the investigation, we used the popular social media platform Facebook and recruited regular users to conduct a qualitative analysis. We asked participants about the social media content suggestions, their comprehensibility, and their explainability. Our analysis shows that users mostly require explanations when they encounter unfamiliar content and to ensure their online data security. Furthermore, users require concise, non-technical explanations along with the ability to control their information flow. In addition, we observed that explanations affect users' perceptions of transparency, trust, and understandability. Finally, we outline some design implications and present a synthesized framework based on our data analysis.


Nearest-Better Network for Visualizing and Analyzing Combinatorial Optimization Problems: A Unified Tool

Diao, Yiya, Li, Changhe, Zeng, Sanyou, Cai, Xinye, Luo, Wenjian, Yang, Shengxiang, Coello, Carlos A. Coello

arXiv.org Artificial Intelligence

The Nearest-Better Network (NBN) is a powerful method for visualizing sampled data from continuous optimization problems while preserving multiple landscape features. However, computing the NBN is very time-consuming, and extending the method to combinatorial optimization problems is challenging but important for analyzing algorithm behavior. This paper provides a straightforward theoretical derivation showing that the NBN essentially functions as the maximum-probability transition network for algorithms. It also presents an efficient NBN computation method with log-linear time complexity to address the cost issue. By applying this efficient NBN algorithm to the OneMax problem and the Traveling Salesman Problem (TSP), we make several discoveries for the first time: the fitness landscape of OneMax exhibits neutrality, ruggedness, and modality, while the primary challenges of TSP instances are ruggedness, modality, and deception. Two state-of-the-art TSP algorithms (EAX and LKH) have limitations when addressing challenges related to modality and deception, respectively. LKH, based on local search operators, fails when there are deceptive solutions near global optima. EAX, which is based on a single population, can efficiently maintain diversity; however, when multiple attraction basins exist, EAX retains individuals within several basins simultaneously, reducing inter-basin interaction efficiency and leading to the algorithm's stagnation.
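The core NBN construction, linking each sampled solution to its nearest solution with strictly better fitness, can be sketched for OneMax as follows. Note this brute-force pairing is quadratic in the number of samples; the paper's contribution is precisely a much faster log-linear algorithm, which this sketch does not reproduce.

```python
def onemax(bits):
    """OneMax fitness: count of ones in the bit string."""
    return sum(bits)

def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def nearest_better_network(samples, fitness=onemax, dist=hamming):
    """For each sample, find the nearest sample with strictly better
    fitness. Samples with no better neighbor (e.g. the best one found)
    map to None; these are candidate attraction-basin centers."""
    edges = {}
    for i, s in enumerate(samples):
        better = [t for t in samples if fitness(t) > fitness(s)]
        edges[i] = min(better, key=lambda t: dist(s, t)) if better else None
    return edges
```

On a landscape with several basins, the solutions mapping to None reveal the basin structure that the paper uses to diagnose EAX's stagnation.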